I Know This Much is True: Thoughts on AI Hallucinations

AI is amazing. For example, it’s revolutionizing search so you can find stuff faster and more efficiently than ever before. Like in 2023, when someone asked Google’s Bard for some cool things about the James Webb telescope that he could tell his 9-year-old, and right away it reported that the telescope took the very first picture of a planet outside of our solar system. Cool, right? And at the other end of the spectrum, in 2022, when a researcher digging into papers on Meta’s science-focused AI platform Galactica turned up a citation for a paper on 3D human avatars by Albert Pumarola.

Unfortunately, both of these results were bullshit.

The first picture of a planet outside our solar system was taken 17 years before the James Webb telescope ever launched, and while Albert Pumarola is a real research scientist, he never wrote the paper Galactica said he did.

So what the hell is going on?

Both of these are cases of “hallucinations” – stuff that AI just gets wrong. And while those two examples come from LLMs (“Large Language Models” – “text-based platforms” to the rest of us), they also happen – with spectacular results – in image-based generators like Midjourney (check this horror show out). But right now, let’s stay focused on the LLMs, just to keep us from losing our minds a little.

And let’s start by reminding ourselves what AI really is: a feedback loop that generates the “next most likely answer,” one word at a time, based on the patterns it has seen in the data it was trained on. So, hallucinations (also called “confabulations,” by the way) occur because, as Ben Lutkevich at TechTarget writes, “LLMs have no understanding of the underlying reality that language describes.” Which, interestingly enough, is fundamentally not how humans understand language. As Khari Johnson writes in Wired (as reprinted in Ars Technica):

UC Berkeley psychology professor Alison Gopnik studies how toddlers and young people learn, to apply that understanding to computing. Children, she said, are the best learners, and the way kids learn language stems largely from their knowledge of and interaction with the world around them. Conversely, large language models have no connection to the world, making their output less grounded in reality.

In other words, for humans, language – words, etc. – represents things in the real world. But for LLMs, words are just elements in the patterns they see in the data. Which, yeah, they are – but for humans, those patterns are in service of something called “meaning,” and for LLMs they’re not. They’re just patterns. And because patterns are a significant part of language, when AI platforms replicate them in their answers back to us, the results sound believable. Like the telescope thing. Or the scientific citation.
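To make that “next most likely answer” idea a little more concrete, here’s a toy sketch – my own illustration, not how any real model is actually built – of a tiny word-frequency model. It counts which word follows which in a made-up mini-corpus, then strings likely next words together. Real LLMs use neural networks trained on billions of sentences, but the basic move is the same, and notice what’s missing: nothing in here ever asks whether the sentence is true.

```python
import random
from collections import Counter, defaultdict

# A made-up mini-corpus, purely for illustration.
corpus = (
    "the telescope took a stunning picture of a distant galaxy . "
    "the telescope took the first picture of a faraway planet . "
    "the probe took the first picture of the red planet ."
).split()

# Tally which word tends to follow which – the "patterns in the data."
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def likely_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = follows[word]
    return random.choices(list(candidates), weights=candidates.values())[0]

# Generate by repeatedly appending a likely next word.
word, output = "the", ["the"]
while word != "." and len(output) < 15:
    word = likely_next(word)
    output.append(word)

print(" ".join(output))
# e.g. "the telescope took the first picture of the red planet ."
# Fluent, plausible, stitched entirely from patterns – whether any of it
# is true never enters into the process.
```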

But I also think there’s another reason why they work on us. We’re sort of pre-programmed to believe them just because we asked the question.

Think of it this way. If you’re looking for information about something, in a sense, you’ve created a pattern in your head for which you are seeking some sort of reinforcement – that is, an answer that fits into the pattern of thinking that generated the question. Take the telescope example above – one could assume from the question that the person already had some awareness of the telescope and its abilities. Perhaps they’d read this article in Smithsonian magazine about seven amazing discoveries it had already made, but felt that the article was too esoteric for a nine-year-old. The point is, they had an expectation, which is, I think, a form of pattern. So when the LLM provided an answer, it plugged very neatly into that pattern, creating an aura of truth around something that was fundamentally false.

And in a sense, this is not new news. Because as every grifter will tell you, for a con to succeed, you gotta get the mark to do at least half the work. And where AI hallucinations are concerned, we sort of are.

So, hallucinations are bad and we have to be on our guard against them because they will destroy AI and us, right?

Well, no, not exactly. In fact, they may actually be a good thing.

“Hallucinations,” says Tim Hwang, who used to be the global public policy lead for AI at Google, “are a feature, not a bug.”

Wait, what?

At the BRXND conference this past May, Tim used the metaphor of smartphones to explain what he meant. First, he reminded us, smartphones existed. Then, he explained, a proper UX was developed not only to use them effectively, but to take advantage of their unique capabilities – capabilities that revolutionized the way we think about phones, communicating, everything. Tim believes we’re in a similar, sort of “pre-smartphone-UX” stage with AI, and that because our interfaces for it are still extremely crude, we’re getting hallucinations. Or, said another way, the hallucinations are telling us that we’re using AI wrong; they’re just not telling us how to use it right yet.

This “using it wrong/using it right” idea got me thinking as I plowed through some of the literature around hallucinations and came across this from Shane Orlick, president of the AI writing tool Jasper.AI (formerly “Jarvis”), in a piece by Matt O’Brien for AP News:

“Hallucinations are actually an added bonus,” Orlick said. “We have customers all the time that tell us how it came up with ideas — how Jasper created takes on stories or angles that they would have never thought of themselves.”

Now sure, this could just be a company president looking at the hallucinations his AI is generating as a glass half full, as it were. But it got me thinking about the idea of creativity – that, in a sense, hallucinations are creativity. They may not be the factual answers you were looking for, but they’ve used those facts as a springboard to something new. You know, like creativity does.

I mean, who among us has not sat in a brainstorm and come up with some wild idea and had someone else in the room say “well, yeah, that makes sense, except for this and this and this” (just me? Oh…). How is that different from the hallucinations we started this essay with? “Yeah, that James Webb Telescope fact makes sense because an exoplanet is the kind of thing it would see, but it didn’t take the first picture of one because of this and this and this.”

And better yet, how many times have you sat in a brainstorm where someone came up with an idea that wasn’t perfect, but was great nonetheless, and the team was able to massage and adjust it until it was? Why couldn’t you do that with AI hallucinations?

Could the path forward be not the elimination of hallucinations, but the ability to choose between outputs that are proven, documented facts and outputs that are creative leaps from those proven, documented facts? Two functions serving two needs, but resident in one place. In much the same way that, in the early days of the internet, we had to wrap our heads around the idea that sometimes we went to a website for facts and information, and sometimes we went to play (and sometimes we went for both. Okay, forget that last example).
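One speculative way to picture that choice – and this is just a sketch, not a product spec – is a dial like the “temperature” setting that real LLM interfaces already expose. Borrowing the toy model from earlier, with some invented confidence scores for candidate next words: a low temperature hugs the best-established pattern, while a high temperature is allowed to wander into the unlikely, “creative” stuff. (Keeping the low-temperature side genuinely factual would take more than a dial – you’d still need to check it against documented sources – but it shows how two modes could live in one place.)

```python
import math
import random

# Invented confidence scores for candidate words after "The telescope..."
# (purely illustrative numbers, not from any real model).
scores = {"observed": 2.0, "photographed": 1.5, "dreamed": 0.3}

def pick(scores, temperature):
    """Low temperature hugs the likeliest continuation; high temperature
    wanders into less likely – call them 'creative' – ones."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

print("play-it-safe mode:", pick(scores, temperature=0.1))
print("brainstorm mode:  ", pick(scores, temperature=3.0))
```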

Now look, I could be completely wrong about all of this. About hallucinations, about telescopes, about what Tim Hwang meant, about the nature of creativity, about the early days of the internet, about all of it. But it would seem to me that inquiry, even one as faulty as mine, is likely the best path to untangling AI, especially in early days like this and especially as we encounter challenges like these. Or said another way:

“The phenomenon of AI hallucinations offers a fascinating glimpse into the complexities of both artificial and human intelligence. Such occurrences challenge our understanding of creativity and logic, encouraging us to probe deeper into the mechanics of thought. However, we must approach this new frontier with a critical and ethical perspective, ensuring that it serves to enhance human understanding rather than obscure or diminish it.”

You know who said that? Albert Einstein. At least according to the internet. And he was pretty smart, so that made me feel much better about hallucinations. You should feel better too. I think.